
Adversarial ML Threat Matrix


Cyberattacks against machine learning systems and the new Adversarial ML Threat Matrix - Securezoo Blog

#artificialintelligence

In the wake of an increase in cyberattacks against machine learning (ML) systems, Microsoft, together with MITRE and contributions from 11 other organizations, has released the Adversarial ML Threat Matrix. The Adversarial ML Threat Matrix is an open, ATT&CK-style framework to help security analysts detect, respond to, and remediate threats against ML systems. Machine learning is often seen as a subset of artificial intelligence (AI) and is based on the ability of systems to learn and improve automatically from experience. Many industries, such as finance, healthcare, and defense, have used ML to transform their businesses and positively impact people worldwide. Despite these advancements in ML and AI, however, Microsoft warned that many organizations have not kept pace with securing their ML systems.


Anti-adversarial machine learning defenses start to take root

#artificialintelligence

Much of the anti-adversarial research has focused on the potential for minute, largely undetectable alterations to images (researchers generally refer to these as "noise perturbations") that cause machine learning (ML) algorithms to misidentify or misclassify the images. Adversarial tampering can be extremely subtle and hard to detect, even down to pixel-level changes. If an attacker can introduce nearly invisible alterations to image, video, speech, or other data for the purpose of fooling AI-powered classification tools, it will be difficult to trust this otherwise sophisticated technology to do its job effectively. This is no idle threat. Eliciting false algorithmic inferences can cause an AI-based app to make incorrect decisions, such as when a self-driving vehicle misreads a traffic sign and turns the wrong way or, in a worst-case scenario, crashes into a building, vehicle, or pedestrian.
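To make the perturbation idea concrete, here is a minimal sketch of a fast-gradient-sign-style attack on a toy logistic-regression classifier. The weights, input, and perturbation budget are invented for illustration and do not come from any of the articles above; real attacks target far larger models, but the mechanism is the same: nudge each input feature slightly in the direction that increases the model's loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier: predict class 1 if w.x + b > 0.
w = np.array([1.0, -2.0])
b = 0.0

def predict(x):
    return int(w @ x + b > 0)

# A clean input, correctly classified as class 1.
x = np.array([0.5, 0.1])
y = 1

# Fast-gradient-sign step: move each feature a small amount
# in the direction that increases the loss for the true label.
p = sigmoid(w @ x + b)
grad = (p - y) * w          # d(cross-entropy)/dx for this linear model
eps = 0.2                   # per-feature perturbation budget
x_adv = x + eps * np.sign(grad)

print(predict(x))      # clean input     -> 1
print(predict(x_adv))  # perturbed input -> 0 (misclassified)
```

A shift of only 0.2 per feature flips the prediction here; on image classifiers, the analogous per-pixel shift can be small enough to be invisible to a human viewer.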


Why Are Increasing Instances of Adversarial Attacks Concerning?

#artificialintelligence

Artificial intelligence (AI) is a technology that mimics human intelligence and computational skills. Today, AI tools are employed to find efficiencies, improve decision making, and offer better end-user experiences. It is also used to fight and prevent cybersecurity threats. AI and its subtype, machine learning (ML), are used by companies to analyze network traffic for anomalous and suspicious activity. However, there are certain limitations when it comes to applying these tools to security.


mitre/advmlthreatmatrix

#artificialintelligence

For corrections and improvements, or to contribute a case study, see Feedback.


Microsoft & Others Catalog Threats to Machine Learning Systems

#artificialintelligence

In March 2016, Microsoft introduced a chatbot on Twitter, dubbed "Tay," that attempted to hold conversations with users and improve its responses through machine learning (ML). A coordinated attack on the chatbot, however, caused the algorithm to start tweeting "wildly inappropriate and reprehensible words and images" within its first 24 hours, Microsoft stated at the time. For the software giant, the attack demonstrated that the world of ML and artificial intelligence (AI) would come with threats. Last week, the company and an interdisciplinary group of security professionals and ML researchers from a dozen other organizations took a first stab at creating a vocabulary for describing attacks on ML systems with the initial draft of the Adversarial ML Threat Matrix. The threat matrix is an extension of MITRE's ATT&CK framework for the classification of attack techniques.


A new threat matrix outlines attacks against machine learning systems - Help Net Security

#artificialintelligence

A report published last year noted that most attacks against artificial intelligence (AI) systems focus on manipulating them (e.g., influencing recommendation systems to favor specific content), but that new attacks using machine learning (ML) are within attackers' capabilities. Microsoft now says that attacks on ML systems are on the uptick, and MITRE notes that, in the last three years, "major companies such as Google, Amazon, Microsoft, and Tesla, have had their ML systems tricked, evaded, or misled." At the same time, most businesses don't have the right tools in place to secure their ML systems and are looking for guidance. Experts at Microsoft, MITRE, IBM, NVIDIA, the University of Toronto, the Berryville Institute of Machine Learning, and several other companies and educational organizations have therefore created the first version of the Adversarial ML Threat Matrix to help security analysts detect and respond to this new type of threat. Machine learning is a subset of artificial intelligence (AI).


The security threat of adversarial machine learning is real

#artificialintelligence

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI. With machine learning becoming increasingly popular, one thing that has been worrying experts is the security threats the technology will entail. We are still exploring the possibilities: The breakdown of autonomous driving systems? Failure of deep learning–based biometric authentication? Meanwhile, machine learning algorithms have already found their way into critical fields such as finance, health care, and transportation, where security failures can have severe repercussions.


Cyberattacks against machine learning systems are more common than you think - Microsoft Security

#artificialintelligence

Machine learning (ML) is making incredible transformations in critical areas such as finance, healthcare, and defense, impacting nearly every aspect of our lives. Many businesses, eager to capitalize on advancements in ML, have not scrutinized the security of their ML systems. Today, Microsoft, along with MITRE and with contributions from 11 organizations including IBM, NVIDIA, and Bosch, is releasing the Adversarial ML Threat Matrix, an industry-focused open framework to empower security analysts to detect, respond to, and remediate threats against ML systems. Over the last four years, Microsoft has seen a notable increase in attacks on commercial ML systems. Market reports are also bringing attention to this problem: Gartner's Top 10 Strategic Technology Trends for 2020, published in October 2019, predicts that "Through 2022, 30% of all AI cyberattacks will leverage training-data poisoning, AI model theft, or adversarial samples to attack AI-powered systems."


New Framework Released to Protect Machine Learning Systems From Adversarial Attacks

#artificialintelligence

Microsoft, in collaboration with MITRE, IBM, NVIDIA, and Bosch, has released a new open framework that aims to help security analysts detect, respond to, and remediate adversarial attacks against machine learning (ML) systems. Called the Adversarial ML Threat Matrix, the initiative is an attempt to organize the different techniques employed by malicious adversaries to subvert ML systems. Just as artificial intelligence (AI) and ML are being deployed in a wide variety of novel applications, threat actors can not only abuse the technology to power their malware but also leverage it to fool machine learning models with poisoned datasets, causing otherwise beneficial systems to make incorrect decisions and posing a threat to the stability and safety of AI applications. Indeed, ESET researchers last year found Emotet -- a notorious email-based malware behind several botnet-driven spam campaigns and ransomware attacks -- to be using ML to improve its targeting. Then, earlier this month, Microsoft warned about a new Android ransomware strain that included a machine learning model which, while yet to be integrated into the malware, could be used to fit the ransom-note image within the screen of the mobile device without any distortion.
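As a rough illustration of the dataset-poisoning idea mentioned above, here is a toy sketch of a label-flipping attack. The nearest-centroid classifier, the data points, and the flipped label are all invented for illustration (none of this comes from the Emotet or Android cases): the attacker corrupts a single training label, which shifts a class centroid and changes how a nearby test point is classified.

```python
import numpy as np

def nearest_centroid_predict(X, y, x_test):
    """Classify x_test by the nearest class centroid of (X, y)."""
    centroids = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
    return min(centroids, key=lambda c: np.linalg.norm(x_test - centroids[c]))

# Clean training set: two well-separated classes.
X = np.array([[0., 0.], [1., 0.], [0., 1.],   # class 0
              [5., 5.], [6., 5.], [5., 6.]])  # class 1
y = np.array([0, 0, 0, 1, 1, 1])

x_test = np.array([2.6, 2.6])
print(nearest_centroid_predict(X, y, x_test))  # clean data -> 0 (correct)

# Label-flipping poisoning: the attacker relabels one class-0
# training point as class 1, dragging the class-1 centroid toward it.
y_poisoned = y.copy()
y_poisoned[0] = 1
print(nearest_centroid_predict(X, y_poisoned, x_test))  # -> 1 (flipped)
```

One corrupted label out of six is enough to flip the prediction here; real poisoning attacks work the same way at scale, hiding a small fraction of mislabeled or crafted samples inside a large training set.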


Microsoft wants to make sure we don't fall victim to murderous AI

#artificialintelligence

Anyone worried about the threat of a Skynet-esque rise of the machines may be able to rest a little easier after the release of new protective measures designed to avoid a potential AI uprising. The nonprofit MITRE Corporation has teamed up with 12 top technology companies, including the likes of Microsoft, IBM, and Nvidia, to launch the Adversarial ML Threat Matrix. The group says the system is an open framework created to help security analysts spot, alert on, respond to, and address threats targeting machine learning (ML) systems. Microsoft says the release was motivated by continuing growth in the number of attacks against commercial ML systems around the world. The company surveyed 28 major businesses, finding that almost all are still unaware of the threat that adversarial machine learning can pose, with 25 of the 28 saying they don't have the right tools in place to secure their ML systems.